Reading up on the ABM Synthesis paper and trying to come up with a simple example of how satisficing can change the outcome of a fishery even when everything else (catchability, number of boats and so on) is exactly the same. The current draft, hinting at something else Matt’s been working on, makes some comparisons on fishing effort or catches per tow, but I think we can get some analysis going just by thinking about fishing location (after all, this means I work less, plus it shows the importance of using the spatial dimension).
Okay, so the key idea here is easy to understand with an example. You all know, and love, the fishing front example. Basically you get 100 profit-maximizing boats up and running; they kill off all the fish near port and then move further and further out (either reaching an equilibrium or killing off all the fish, depending on parameters). What happens if instead you make those 100 boats fish completely at random?
There is obviously no local depletion, since the boats do not concentrate on any one spot. Moreover, since most spots retain most of their fish, they all produce quite a lot of new biomass each year. So overall biomass actually fares a lot better. Neat.
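To make this concrete, here is a minimal toy sketch of the contrast. This is not POSEIDON or the paper’s model; the patch count, catchability, and the capped proportional regrowth are all made up for illustration:

```python
import random

def simulate(policy, n_steps=200, n_patches=50, n_boats=100,
             r=0.1, K=100.0, catchability=0.03, seed=0):
    """Toy fishery: each patch regrows 10% a step (capped at K); the fleet
    either piles onto the currently richest patch or spreads out at random."""
    rng = random.Random(seed)
    biomass = [K] * n_patches
    for _ in range(n_steps):
        if policy == "maximize":
            # every boat targets the same best-looking patch: local depletion
            target = max(range(n_patches), key=lambda i: biomass[i])
            biomass[target] *= (1.0 - catchability) ** n_boats
        else:  # "random": each boat picks its own patch uniformly at random
            for _ in range(n_boats):
                i = rng.randrange(n_patches)
                biomass[i] *= 1.0 - catchability
        # regrowth: proportional to what's left, capped at carrying capacity
        biomass = [min(K, b * (1.0 + r)) for b in biomass]
    return sum(biomass)
```

Same number of boats, same catchability, same total effort; the only difference is where the boats go, and under these made-up parameters the random fleet ends up with noticeably more total biomass than the concentrated one.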
Now, with satisficing you don’t get pure randomness (unless the thresholds are stupidly high, in which case you do, because people are indifferent towards anything). However, with satisficing you often get very little imitation (your friend might be making more money, but you don’t mind). This again creates little local depletion, which in turn results in very different total biomass dynamics (even though the effort is the same).
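The imitation side of this fits in a couple of lines; here is a sketch (the function name and the $/hr threshold are just my own illustration):

```python
def imitate_or_stay(my_spot, my_profit, friend_spot, friend_profit, threshold):
    """Satisficing imitation: switch to your friend's fishing spot only when
    their profit beats yours by more than `threshold` ($/hr); otherwise you
    don't mind the gap and stay where you are."""
    if friend_profit - my_profit > threshold:
        return friend_spot
    return my_spot
```

With a generous threshold hardly anyone ever moves, which is exactly the “very little imitation” dynamic above.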
Now, the way I see it, after talking to Matt, there are really 4 “simple” satisficing strategies. Two involve modifying the profit function:
The other two involve modifying the exploration rate instead. So imagine turning off imitation and focusing on pure explore-exploit.
These last two are more “procedural” (since they change the behaviour rather than the utility function), but they are very much like the “reservation utility” concept of Caplin, Dean, and Martin in a 2011 AER paper.
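One way to read that reservation-utility idea in explore-exploit terms is the following sketch; the two thresholds and the $/hr framing are my own illustration, not the paper’s notation:

```python
import random

def next_move(trip_utility, lower, upper, rng=random):
    """Reservation-utility style rule: keep exploiting while the last trip
    cleared the upper threshold, start exploring once it falls below the
    lower one, and flip a coin in the indifference band in between."""
    if trip_utility >= upper:
        return "exploit"
    if trip_utility < lower:
        return "explore"
    return rng.choice(["exploit", "explore"])
```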
Run the model 25 times, 10 years each, with every agent being explore-exploit-imitate. Vary only the upper utility threshold (the threshold is in $/hr out, and the utility is used to judge trip quality). The result is relatively obvious: greed is bad. There is a critical upper threshold (between 6 and 8) beyond which agents end up screwing the environment as if they were profit maximizers (the “None” column). If people satisfy themselves with little, there is a lot more biomass for everyone. This is, however, not due to conservation per se but simply to not depleting any single best spot all at once.
The lower threshold carries more or less the same lesson. If your threshold is too high, that is, if you are very greedy, then you end up acting very much like a profit maximizer. The dynamic is a little less intuitive, however, because what really happens is that if your lower threshold is very high, say 12 $/hr, then you initially do act like a profit maximizer, until all the areas that can net you 12 $/hr are gone, after which you act quite randomly.
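That two-phase dynamic is easy to sketch (spot names and profit numbers are invented for illustration):

```python
import random

def pick_spot(expected_profits, lower_threshold, rng=random):
    """Lower-threshold satisficing: chase the best spot while it still clears
    the threshold; once nothing does, you're indifferent and pick at random."""
    best = max(expected_profits, key=expected_profits.get)
    if expected_profits[best] >= lower_threshold:
        return best
    return rng.choice(sorted(expected_profits))
```

While any spot still nets 12 $/hr this is indistinguishable from profit maximization; once they are all fished down below that, it collapses into random choice.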
The moral is slightly different when we move the threshold to exploration rather than satisfaction. In this case, being satisfied with everything (very low threshold) or with nothing (very high threshold) is equivalent in the aggregate. A very low threshold means no exploration, so everybody sticks to the spot they are at (until it is depleted, and then moves on). A very high threshold means always exploring, which also doesn’t really destroy any single part of the map.
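The two extremes fall straight out of a threshold rule on exploration (the trip figures below are made up for illustration):

```python
def will_explore(trip_utility, threshold):
    """Exploration threshold: try a new spot only when the last trip's
    utility ($/hr) fell below the threshold."""
    return trip_utility < threshold

trips = [2.0, 5.0, 7.0, 9.0]  # some plausible trip utilities
# threshold below every trip: nobody ever explores, everybody sticks
stuck = [will_explore(u, 0.0) for u in trips]
# threshold above every trip: everybody always explores, nobody sticks
roving = [will_explore(u, 100.0) for u in trips]
```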
Social Annealing
With social annealing, being satisfied with a lot less than the average profit tends to result in no local depletion and higher biomass. Being satisfied with nothing compared to the average also tends to perform well for the environment, because it pushes people to keep exploring. Wanting just a bit more than the average is associated with the most depletion.
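A sketch of the rule (the “offset” framing is my own shorthand for how much more or less than the fleet average you aspire to):

```python
def keeps_searching(my_profit, fleet_average, offset):
    """Social annealing sketch: your aspiration is the fleet's average profit
    plus an offset; you keep exploring while you earn less than that.
    Negative offset = satisfied with less than average (settle quickly);
    large positive offset = never satisfied (explore forever)."""
    return my_profit < fleet_average + offset
```

A slightly positive offset is the dangerous middle ground: everybody chases a bit more than the average, converges on the same good spots, and depletes them together.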